Self-Supervised Audio-Visual Representation Learning with Relaxed Cross-Modal Synchronicity

Authors

  • Pritam Sarkar, Queen's University, Canada; Vector Institute
  • Ali Etemad, Queen's University, Canada

DOI:

https://doi.org/10.1609/aaai.v37i8.26162

Keywords:

ML: Unsupervised & Self-Supervised Learning, CV: Video Understanding & Activity Analysis, CV: Multi-modal Vision, CV: Representation Learning for Vision

Abstract

We present CrissCross, a self-supervised framework for learning audio-visual representations. Our framework introduces a novel notion: in addition to learning the intra-modal and standard 'synchronous' cross-modal relations, CrissCross also learns 'asynchronous' cross-modal relationships. We perform in-depth studies showing that by relaxing the temporal synchronicity between the audio and visual modalities, the network learns strong generalized representations useful for a variety of downstream tasks. To pretrain our proposed solution, we use three datasets of varying sizes: Kinetics-Sound, Kinetics400, and AudioSet. The learned representations are evaluated on a number of downstream tasks, namely action recognition, sound classification, and action retrieval. Our experiments show that CrissCross either outperforms or achieves performance on par with current state-of-the-art self-supervised methods on action recognition and action retrieval with UCF101 and HMDB51, as well as sound classification with ESC50 and DCASE. Moreover, CrissCross outperforms fully-supervised pretraining when pretrained on Kinetics-Sound.
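To make the three relation types named in the abstract concrete, the following is a minimal, illustrative PyTorch sketch of how intra-modal, synchronous cross-modal, and asynchronous cross-modal terms could be combined into one training objective. The InfoNCE-style formulation, the function names, and the equal weighting of the terms are assumptions made here for illustration only; they are not the paper's exact loss, which should be taken from the full text.

```python
import torch
import torch.nn.functional as F


def info_nce(z1, z2, temperature=0.1):
    """Generic InfoNCE loss between two batches of embeddings (illustrative, not the paper's loss)."""
    z1 = F.normalize(z1, dim=-1)
    z2 = F.normalize(z2, dim=-1)
    logits = z1 @ z2.t() / temperature                     # (B, B) pairwise similarities
    labels = torch.arange(z1.size(0), device=z1.device)    # matching index = positive pair
    return F.cross_entropy(logits, labels)


def relaxed_sync_loss(v_t, a_t, v_t_shift, a_t_shift):
    """Combine the three relation types described in the abstract.

    v_t, a_t            : video / audio embeddings from the same time segment of a clip
    v_t_shift, a_t_shift: embeddings from a different (shifted) time segment of the same clip

    - intra-modal:          same modality, different time segments
    - synchronous x-modal:  temporally aligned video and audio
    - asynchronous x-modal: video paired with audio from a different time segment
                            (i.e., relaxed cross-modal synchronicity)
    """
    l_intra = info_nce(v_t, v_t_shift) + info_nce(a_t, a_t_shift)
    l_sync = info_nce(v_t, a_t) + info_nce(v_t_shift, a_t_shift)
    l_async = info_nce(v_t, a_t_shift) + info_nce(v_t_shift, a_t)
    return l_intra + l_sync + l_async
```

In this sketch the asynchronous term treats temporally misaligned audio-visual pairs from the same clip as positives, which captures the abstract's idea of relaxing temporal synchronicity; how the actual method forms and weights these pairs is detailed in the paper itself.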

Published

2023-06-26

How to Cite

Sarkar, P., & Etemad, A. (2023). Self-Supervised Audio-Visual Representation Learning with Relaxed Cross-Modal Synchronicity. Proceedings of the AAAI Conference on Artificial Intelligence, 37(8), 9723-9732. https://doi.org/10.1609/aaai.v37i8.26162

Issue

Vol. 37 No. 8 (2023)

Section

AAAI Technical Track on Machine Learning III